Conversation

@ngxson (Collaborator) commented Aug 12, 2024

Resolves #8974

An error message is now shown if the input LoRA adapter is quantized; only a quantized base model is supported for now.
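The check described above can be sketched as follows. This is an illustrative, hypothetical sketch, not the actual llama.cpp code: the enum values and function names (`ggml_type_sketch`, `is_quantized`, `check_lora_tensor`) are invented for this example, and only stand in for the real ggml type enum and the PR's validation logic.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical subset of tensor types, standing in for the real ggml enum.
enum ggml_type_sketch {
    TYPE_F32,
    TYPE_F16,
    TYPE_Q4_0,
    TYPE_Q8_0,
};

// Quantized means anything other than full/half-precision floats.
static bool is_quantized(ggml_type_sketch t) {
    return t != TYPE_F32 && t != TYPE_F16;
}

// Mimics the PR's behavior: reject a quantized LoRA adapter tensor
// with an error message, since only the base model may be quantized.
static void check_lora_tensor(const std::string & name, ggml_type_sketch t) {
    if (is_quantized(t)) {
        throw std::runtime_error(
            "error: lora adapter tensor '" + name +
            "' is quantized; only F32/F16 adapters are supported");
    }
}
```

An F16 adapter tensor passes silently, while a Q8_0 one raises the error, matching the behavior the PR description outlines.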


@ngxson ngxson requested a review from ggerganov August 12, 2024 09:09
@ngxson ngxson merged commit 828d6ff into ggml-org:master Aug 13, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024
Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Feb 25, 2025

Successfully merging this pull request may close these issues.

Bug: llama-export-lora fails merging a T5 model with its LoRA adapter
